refactor(robot-server): move run time params to their own DB table #15650
Conversation
…erification process and database table
A PR has been opened to address analyses snapshot changes. Please review the changes here: https://github.com/Opentrons/opentrons/pull/
Working my way through this, looks great, thanks! Here are some requests.
sqlalchemy.Column(
    "run_time_parameter_values_and_defaults",
    "parameter_variable_name",
It might make sense to throw in unique=True on the parameter_variable_name column.
Whoops, no, sorry, definitely not just on the parameter_variable_name column. It would need to be on the ("analysis_id", "parameter_variable_name") combination.
This is just a nice-to-have.
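To illustrate the suggestion, here is a minimal sketch of a composite unique constraint using stdlib SQLite rather than the project's SQLAlchemy schema; the table name and the analysis_id column are assumptions based on this discussion, not the real migration code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE analysis_primitive_rtp (
        row_id INTEGER PRIMARY KEY,
        analysis_id TEXT NOT NULL,
        parameter_variable_name TEXT NOT NULL,
        -- The constraint covers the pair, not parameter_variable_name alone:
        UNIQUE (analysis_id, parameter_variable_name)
    )
    """
)

# The same variable name may appear under different analyses...
conn.execute(
    "INSERT INTO analysis_primitive_rtp (analysis_id, parameter_variable_name)"
    " VALUES ('a1', 'volume')"
)
conn.execute(
    "INSERT INTO analysis_primitive_rtp (analysis_id, parameter_variable_name)"
    " VALUES ('a2', 'volume')"
)

# ...but inserting the same (analysis_id, name) pair twice is rejected.
try:
    conn.execute(
        "INSERT INTO analysis_primitive_rtp (analysis_id, parameter_variable_name)"
        " VALUES ('a1', 'volume')"
    )
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

A name-only unique constraint would have rejected the second insert too, which is why the pair is the right scope.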
schema_5.run_command_table,
source_transaction,
dest_transaction,
order_by_rowid=True,
Nitpick: I think this is harmless, but order_by_rowid can be False for run_command_table because it has explicitly-declared row_id and index_in_run columns.

I wonder if order_by_rowid=True is actually any slower or more memory-hungry than =False. Maybe it should just always be True to simplify things.
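For context on why ordering by rowid is safe here, a small sketch with stdlib SQLite (the table and column names are illustrative, not the real schema): every ordinary SQLite table has an implicit rowid that reflects insertion order, and an explicit ORDER BY rowid makes that order deterministic when copying rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Implicit rowid table; "run_command" here is just an illustrative name.
conn.execute("CREATE TABLE run_command (payload TEXT)")
for payload in ["aspirate", "dispense", "drop_tip"]:
    conn.execute("INSERT INTO run_command (payload) VALUES (?)", (payload,))

# Without an explicit ORDER BY, SQLite makes no guarantee about result
# order; ordering by the implicit rowid recovers insertion order.
rows = conn.execute("SELECT payload FROM run_command ORDER BY rowid").fetchall()
ordered = [r[0] for r in rows]
```

When a table already has its own explicit ordering column (like index_in_run), the rowid ordering is redundant but not wrong, which is the "harmless" part of the nitpick.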
Final set of comments.
A PR has been opened to address analyses snapshot changes. Please review the changes here: https://github.com/Opentrons/opentrons/pull/
Notes from meeting:
- Update the persistence snapshot tests to include RTP checks.
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@ Coverage Diff @@
## edge #15650 +/- ##
===========================================
+ Coverage 63.25% 92.43% +29.18%
===========================================
Files 287 77 -210
Lines 15013 1283 -13730
===========================================
- Hits 9497 1186 -8311
+ Misses 5516 97 -5419
Flags with carried forward coverage won't be shown.
Great, thanks!
Nice, I'm always for breaking stuff into more steps to better mirror semantics. I have a couple things I'd love to see changed, but if stuff depends on this getting merged it's ok:
- Remove some null checks by slightly reorganizing
- Make remaining checks not just asserts but actual errors
- Just saying, Scalar feels like a better name than Primitive
import robot_server.errors.error_mappers as em

class FailedToInitializeAnalyzer(Exception):
Should this be an enumerated error so it can carry more context?
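One way to carry more context, sketched as a plain Exception subclass; this is not the real robot-server enumerated-error API, and the field names are assumptions for illustration.

```python
class FailedToInitializeAnalyzer(Exception):
    """Raised when a protocol analyzer cannot be set up.

    Carries structured context so callers and error mappers can report
    which protocol/analysis failed and why, instead of a bare message.
    """

    def __init__(self, protocol_id: str, analysis_id: str, detail: str) -> None:
        super().__init__(
            f"Failed to initialize analyzer for protocol {protocol_id!r} "
            f"(analysis {analysis_id!r}): {detail}"
        )
        self.protocol_id = protocol_id
        self.analysis_id = analysis_id
        self.detail = detail


try:
    raise FailedToInitializeAnalyzer("p1", "a1", "missing hardware API")
except FailedToInitializeAnalyzer as exc:
    caught = (exc.protocol_id, exc.analysis_id, exc.detail)
```

An enumerated error in the robot-server sense would additionally map to a stable error code for the HTTP API; the point here is just that structured fields survive into the handler.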
parameter,
prev_value_and_default,
) in rtp_values_and_defaults_in_last_analysis.items():

assert set(param.variableName for param in new_parameters) == set(
Are we sure about this one? I really don't like using assert in this way in production code. If we're going to be paranoid, why not have this function (1) log something big and then (2) just return False, since if these parameter names don't match then they're definitionally different?
@sfoster1 can you explain why you don't like using assert in this way?

I prefer assert rather than silently logging because it makes sure that the error is actually brought to attention. It's a practice we've followed in most of the robot server and engine code. If we only log the error and continue running (by returning False in this case), then the server might run into some other error down the line, because failing this assertion means that there's a bug in our code somewhere else. So it's better to fail right away and know why it failed, rather than failing a little later with a possibly unclear reason.
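The two positions in this thread can be sketched side by side; the function names and the set-of-names comparison are illustrative stand-ins, not the actual code under review.

```python
import logging

logger = logging.getLogger(__name__)


def parameters_match_failfast(new_names: set, old_names: set) -> bool:
    # Fail-fast style: a name mismatch means a bug upstream, so crash
    # loudly at the point of detection rather than somewhere later.
    assert new_names == old_names, (
        f"RTP names changed between analyses: {new_names} vs {old_names}"
    )
    return True


def parameters_match_lenient(new_names: set, old_names: set) -> bool:
    # Log-and-continue style: mismatched names are definitionally
    # "different parameters", so report loudly and return False.
    if new_names != old_names:
        logger.error(
            "RTP names changed between analyses: %s vs %s", new_names, old_names
        )
        return False
    return True
```

One practical caveat with the assert style: Python strips assert statements under `python -O`, so a hard failure that must survive production flags is usually better expressed as an explicit raise.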
@@ -407,22 +409,22 @@ def add(
     self,
     protocol_id: str,
     analysis_id: str,
-) -> PendingAnalysis:
+    run_time_parameters: List[RunTimeParameter],
+) -> None:
why change this? feels like a nice convenience thing to return the newly created object, and it's a smaller diff
with self._sql_engine.begin() as transaction:
    results = transaction.execute(statement).all()

rtps: Dict[str, PrimitiveAllowedTypes] = {}
nit:
resources = [PrimitiveParameterResource.from_sql_row(row) for row in results]
rtps = {param.parameter_variable_name: param.parameter_value for param in resources}
Means you don't have to explicitly type rtps.
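A self-contained version of the suggested pattern; the dataclass and the direct construction stand in for the real PrimitiveParameterResource.from_sql_row, which isn't shown here.

```python
from dataclasses import dataclass
from typing import Union

# Stand-in for the real PrimitiveAllowedTypes alias.
PrimitiveAllowedTypes = Union[str, int, float, bool]


@dataclass
class PrimitiveParameterResource:
    parameter_variable_name: str
    parameter_value: PrimitiveAllowedTypes


# Pretend these came back from the SQL query.
rows = [("volume", 50), ("dry_run", True)]
resources = [PrimitiveParameterResource(name, value) for name, value in rows]

# The dict type is inferred from the comprehension, so no explicit
# Dict[str, PrimitiveAllowedTypes] annotation is needed.
rtps = {r.parameter_variable_name: r.parameter_value for r in resources}
```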
@@ -34,45 +37,57 @@ def __init__(
    """Initialize the analyzer and its dependencies."""
    self._analysis_store = analysis_store
    self._protocol_resource = protocol_resource
    self._orchestrator: Optional[RunOrchestrator] = None
Looking around the code, it seems like the first thing you always do with a protocol analyzer is call load_runner (now load_orchestrator) with the RTPs, including in this PR. Given that, instead of having load_orchestrator, can we:
- Require an orchestrator instance as an initializer param, meaning it's non-null by construction
- Make create_protocol_analyzer take the optional run_time_param values and build this there

That way we don't need all these asserts, and we make it impossible to call get_verified_run_time_parameters in an illegal way.
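The suggested restructuring can be sketched as follows; all of the classes here are simplified stand-ins for the real RunOrchestrator, ProtocolAnalyzer, and create_protocol_analyzer, whose actual signatures are not shown in this thread.

```python
from typing import List, Optional


class RunOrchestrator:
    """Simplified stand-in for the real orchestrator."""

    def __init__(self, run_time_parameters: List[str]) -> None:
        self.run_time_parameters = run_time_parameters


class ProtocolAnalyzer:
    # Requiring the orchestrator at construction makes it non-null by
    # design, so no Optional field and no asserts before each use.
    def __init__(self, orchestrator: RunOrchestrator) -> None:
        self._orchestrator = orchestrator

    def get_verified_run_time_parameters(self) -> List[str]:
        return self._orchestrator.run_time_parameters


def create_protocol_analyzer(
    run_time_param_values: Optional[List[str]] = None,
) -> ProtocolAnalyzer:
    # The factory builds the orchestrator, so callers can never obtain
    # a half-initialized analyzer and then call methods on it illegally.
    orchestrator = RunOrchestrator(run_time_param_values or [])
    return ProtocolAnalyzer(orchestrator)


analyzer = create_protocol_analyzer(["volume"])
```

This is the "make illegal states unrepresentable" move: the type system, rather than runtime asserts, guarantees the orchestrator exists.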
Thanks for the feedback @sfoster1. I'll address them in a follow-up PR since this one's quite overdue for merging.
Closes AUTH-527
Overview
This PR moves primitive RTPs to their own table and refactors the analysis-generation process. This refactor makes it easier to add CSV-parameter handling later.
Architecture and database changes:
Here's what the new analysis flow looks like at a high level:
Test Plan
Changelog
AnalysesManager, ProtocolAnalyzer, AnalysisStore, and the protocols router have been updated to use the new flow as shown above.

Review requests
Risk assessment
Medium-high. It changes quite a critical part of the protocol upload and analysis process. But the fact that all integration tests and analyses snapshots are passing makes me quite confident that the existing behavior of these endpoints has not been changed by this work.